Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision rules, in particular when their training data is biased, i.e., when training labels are strongly correlated with undesirable features. To prevent a network from learning such features, recent methods augment training data such that examples displaying spurious correlations (i.e., bias-aligned examples) become a minority, whereas the other, bias-conflicting examples become prevalent. However, these approaches are sometimes difficult to train and scale to real-world data because they rely on generative models or disentangled representations. We propose an alternative based on mixup, a popular augmentation that creates convex combinations of training examples. Our method, coined SelecMix, applies mixup to contradicting pairs of examples, defined as showing either (i) the same label but dissimilar biased features, or (ii) different labels but similar biased features. Identifying such pairs requires comparing examples with respect to unknown biased features. For this, we utilize an auxiliary contrastive model with the popular heuristic that biased features are learned preferentially during training. Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
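To make the pair-selection rule concrete, here is a minimal, hypothetical sketch (not the authors' released implementation): an auxiliary model's embeddings stand in for the unknown biased features, pairs are chosen by cosine similarity under rule (i) (same label, dissimilar biased features), and the selected pair is combined with standard mixup. The function names, the Beta(alpha, alpha) sampling, and the single-rule simplification are all assumptions.

```python
import torch
import torch.nn.functional as F

def selecmix_batch(x, y, aux_embed, alpha=1.0):
    """Illustrative sketch of SelecMix-style pair selection followed by mixup.

    x: (B, ...) inputs, y: (B,) integer labels,
    aux_embed: (B, D) features from an auxiliary (bias-amplified) model.
    Each example is paired with a "contradicting" partner: same label but
    dissimilar auxiliary features (rule (i) in the abstract).
    """
    sim = F.cosine_similarity(aux_embed.unsqueeze(1), aux_embed.unsqueeze(0), dim=-1)  # (B, B)
    same_label = y.unsqueeze(1).eq(y.unsqueeze(0))                                     # (B, B)

    # Rule (i): same label, most dissimilar biased features -> smallest similarity.
    masked = sim.masked_fill(~same_label, float("inf"))
    masked.fill_diagonal_(float("inf"))
    partner = masked.argmin(dim=1)                                                     # (B,)

    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x + (1.0 - lam) * x[partner]
    # Both sides share the label under rule (i); rule (ii) pairs would instead
    # mix the two labels with weights (lam, 1 - lam).
    return x_mix, y, y[partner], lam
```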
Several studies have empirically compared the in-distribution (ID) and out-of-distribution (OOD) performance of various models. They report frequent positive correlations on benchmarks in computer vision and NLP. Surprisingly, they never observe inverse correlations, which would indicate a necessary trade-off. This matters because it determines whether ID performance can serve as a proxy for OOD generalization. This short paper shows that inverse correlations between ID and OOD performance do occur on real-world benchmarks. They may have been missed in past studies because of a biased selection of models. We showcase examples of this pattern on the WILDS-Camelyon17 dataset, using models from multiple training epochs and random seeds. Our observations are particularly striking for models trained with regularizers that diversify the solutions to the ERM objective. We nuance recommendations and conclusions made in past studies. (1) High OOD performance does sometimes require trading off ID performance. (2) Focusing on ID performance alone may not lead to optimal OOD performance: it can yield diminishing and eventually negative returns in OOD performance. (3) Our examples remind us that empirical studies only chart the regimes achievable with existing methods: care is needed when drawing prescriptive recommendations from them.
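For concreteness, the analysis described above reduces to correlating paired ID/OOD accuracies collected across training epochs and random seeds; a tiny illustration with made-up numbers (not data from the paper):

```python
import numpy as np

# Hypothetical (ID accuracy, OOD accuracy) pairs gathered from checkpoints
# across several epochs and seeds; the values are invented for illustration.
acc = np.array([
    [0.92, 0.71],
    [0.94, 0.69],
    [0.95, 0.66],
    [0.97, 0.62],
])
r = np.corrcoef(acc[:, 0], acc[:, 1])[0, 1]
print(f"Pearson correlation between ID and OOD accuracy: {r:.2f}")  # negative here
```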
Image-Text Matching (ITM) is a common task for evaluating vision-and-language (VL) models. However, existing ITM benchmarks have a significant limitation: they contain many missing correspondences that originate from the data construction process itself. For example, a caption is matched to only one image even though it could equally describe other similar images, and vice versa. To correct these massive false negatives, we construct the Extended COCO Validation (ECCV) Caption dataset by supplying the missing associations with machine and human annotators. We employ five state-of-the-art ITM models with diverse properties in the annotation process. Compared to the original MS-COCO, our dataset provides x3.6 positive image-to-caption associations and x8.5 caption-to-image associations. We also propose to use a rank-based metric, mAP@R, instead of the popular Recall@K (R@K). We re-evaluate 25 existing VL models on the existing and proposed benchmarks. Our findings are that the existing benchmarks, such as COCO 1K R@K, COCO 5K R@K, and CxC R@1, are highly correlated with one another, while the rankings change when we shift to ECCV mAP@R. Lastly, we delve into the effect of the bias introduced by the choice of machine annotators. Source code and dataset are available at https://github.com/naver-ai/eccv-caption
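A minimal sketch of the mAP@R metric, assuming the standard rank-based definition (average precision computed over the top-R retrieved items, where R is the number of ground-truth matches for the query); this is a generic illustration, not the dataset's official evaluation code:

```python
import numpy as np

def map_at_r(ranked_is_positive, num_positives):
    """mAP@R for a single query.

    ranked_is_positive: booleans over the retrieved ranking
                        (True where the retrieved item is a ground-truth match).
    num_positives: R, the number of ground-truth matches for this query.
    """
    R = num_positives
    if R == 0:
        return 0.0
    top = np.asarray(ranked_is_positive[:R], dtype=float)
    # Precision at each rank i (1-indexed), counted only at positions holding a positive.
    precisions = np.cumsum(top) / (np.arange(R) + 1)
    return float(np.sum(precisions * top) / R)

# Example: R = 3 positives, hits at positions 1 and 3 of the top-3 retrieval.
print(map_at_r([True, False, True, True], num_positives=3))  # (1/1 + 2/3) / 3 ≈ 0.556
```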
Vision Transformer (ViT) extends the application range of transformers from language processing to computer vision tasks, serving as an alternative architecture to the existing convolutional neural networks (CNNs). Since the transformer-based architecture is a recent innovation in computer vision modeling, design conventions for an effective architecture have so far been studied less. Drawing on the successful design principles of CNNs, we investigate the role of spatial dimension conversion and its effectiveness on transformer-based architectures. We particularly attend to the dimension reduction principle of CNNs: as depth increases, a conventional CNN increases the channel dimension and decreases the spatial dimensions. We empirically show that such a spatial dimension reduction is beneficial to a transformer architecture as well, and propose a novel Pooling-based Vision Transformer (PiT) upon the original ViT model. We show that PiT achieves improved model capability and generalization performance against ViT. Through extensive experiments, we further show that PiT outperforms the baseline on several tasks such as image classification, object detection, and robustness evaluation. Source codes and ImageNet models are available at https://github.com/naver-ai/pit.
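The spatial dimension reduction idea can be illustrated with a small token-pooling module: patch tokens are reshaped to a 2-D grid, spatially downsampled, and widened in the channel dimension, mirroring the CNN convention described above. This is a simplified sketch under those assumptions, not the released PiT code (which, among other details, handles the class token separately):

```python
import torch
import torch.nn as nn

class TokenPooling(nn.Module):
    """Halve the spatial resolution of patch tokens and widen the channels."""

    def __init__(self, dim_in, dim_out):
        super().__init__()
        # A strided depthwise conv performs the spatial reduction on the token grid.
        self.reduce = nn.Conv2d(dim_in, dim_out, kernel_size=3, stride=2,
                                padding=1, groups=dim_in)

    def forward(self, tokens, grid_hw):
        B, N, C = tokens.shape
        H, W = grid_hw
        x = tokens.transpose(1, 2).reshape(B, C, H, W)   # (B, C, H, W)
        x = self.reduce(x)                               # (B, C_out, H/2, W/2)
        B, C2, H2, W2 = x.shape
        return x.flatten(2).transpose(1, 2), (H2, W2)    # back to (B, N', C_out)

# Example: 14x14 patch tokens of width 128 -> 7x7 tokens of width 256.
pool = TokenPooling(128, 256)
out, hw = pool(torch.randn(2, 14 * 14, 128), (14, 14))
print(out.shape, hw)  # torch.Size([2, 49, 256]) (7, 7)
```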
Weakly-supervised object localization (WSOL) has gained popularity over the last years as a way to train localization models with only image-level labels. Since the seminal WSOL work on class activation mapping (CAM), the field has focused on how to expand the attention regions to cover objects more broadly and localize them better. However, these strategies rely on full localization supervision for validating hyperparameters and selecting models, which is in principle prohibited under the WSOL setup. In this paper, we argue that the WSOL task is ill-posed with only image-level labels, and propose a new evaluation protocol where full supervision is limited to a small held-out set that does not overlap with the test set. We observe that, under our protocol, the five most recent WSOL methods have not made a major improvement over the CAM baseline. Moreover, we report that existing WSOL methods have not yet reached the few-shot learning baseline, where the full supervision at validation time is used for model training instead. Based on our findings, we discuss some future directions for WSOL.
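Since the comparison above revolves around the CAM baseline, a brief reminder of how a class activation map is formed may help. The sketch below follows the standard CAM recipe (the last convolutional feature map weighted by the classifier weights of the target class); it is not code from this paper:

```python
import torch

def class_activation_map(feature_map, fc_weight, class_idx):
    """Standard CAM: weight the final conv feature map by the classifier
    weights of the target class.

    feature_map: (C, H, W) activations before global average pooling.
    fc_weight:   (num_classes, C) weights of the final linear classifier.
    """
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], feature_map)
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-8)   # normalize to [0, 1] for thresholding

# Toy example with random tensors standing in for a CNN's outputs.
cam = class_activation_map(torch.randn(512, 7, 7), torch.randn(10, 512), class_idx=3)
print(cam.shape)  # torch.Size([7, 7])
```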
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. Source code and pretrained models are available at https://github.com/clovaai/CutMix-PyTorch.
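The augmentation is straightforward to reproduce. The sketch below follows the conventions of the public implementation (a Beta-sampled mixing ratio, a random box whose area matches it, and label weights proportional to the pasted area after clipping); the exact hyperparameters are assumptions:

```python
import numpy as np
import torch

def cutmix(x, y, alpha=1.0):
    """Cut a random patch from a shuffled copy of the batch, paste it in,
    and weight the two labels by the (area-corrected) mixing ratio."""
    B, _, H, W = x.shape
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(B)

    # Box whose area is (1 - lam) of the image, centered at a random point.
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(H * cut_ratio), int(W * cut_ratio)
    cy, cx = np.random.randint(H), np.random.randint(W)
    y1, y2 = np.clip(cy - cut_h // 2, 0, H), np.clip(cy + cut_h // 2, 0, H)
    x1, x2 = np.clip(cx - cut_w // 2, 0, W), np.clip(cx + cut_w // 2, 0, W)

    x_mixed = x.clone()
    x_mixed[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]
    # Correct lambda for clipping at the image border.
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (H * W)
    return x_mixed, y, y[perm], lam

# The training loss is then: lam * CE(logits, y_a) + (1 - lam) * CE(logits, y_b).
```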
Recent studies have made great progress in video matting by extending the success of trimap-based image matting to the video domain. In this paper, we push this task toward a more practical setting and propose One-Trimap Video Matting network (OTVM), which performs video matting robustly using only a single user-annotated trimap. A key element of OTVM is the joint modeling of trimap propagation and alpha prediction. Starting from baseline trimap propagation and alpha prediction networks, our OTVM combines the two networks with an alpha-trimap refinement module to facilitate information flow. We also present an end-to-end training strategy to take full advantage of the joint model. Compared with previous decoupled methods, our joint modeling greatly improves the temporal stability of trimap propagation. We evaluate our model on two latest video matting benchmarks, Deep Video Matting and VideoMatting108, and outperform the state-of-the-art by significant margins (MSE improvements of 56.4% and 56.7%, respectively). The source code and model are available online: https://github.com/hongje/otvm.
Image translation based on generative adversarial networks (GAN-IT) is a promising method for precisely localizing abnormal regions in chest X-ray images (AL-CXR). However, heterogeneous unpaired datasets undermine existing methods' ability to extract key features and to distinguish normal from abnormal cases, resulting in inaccurate and unstable AL-CXR. To address this problem, we propose an improved two-stage GAN-IT involving registration and data augmentation. In the first stage, we introduce an invertible, learning-based registration technique that virtually and reasonably converts unpaired data into paired data for learning the registration maps. This novel approach achieves high registration performance. In the second stage, we apply data augmentation to the left and right lung regions of the uniformly registered frames to diversify anomaly locations, further improving performance by alleviating the imbalance in the data distribution of left and right lung lesions. Our method is designed to be applied to existing GAN-IT models, so that existing architectures benefit from the key features of translation. By demonstrating that AL-CXR performance improves uniformly when the proposed method is applied, we believe that GAN-IT for AL-CXR can be deployed in clinical environments even when training data are scarce.
Global food insecurity is expected to worsen in the coming decades with the accelerated rate of climate change and the rapidly increasing population. In this vein, it is important to remove inefficiencies at every level of food production. The recent advances in deep learning can help reduce such inefficiencies, yet their application has not become mainstream throughout the industry, inducing economic costs at a massive scale. To this end, modern techniques such as CNNs (convolutional neural networks) have been applied to RPQD (raw produce quality detection) tasks. On the other hand, the successful debut of Transformers in computer vision, among other modalities, leads us to expect better performance from Transformer-based models on RPQD. In this work, we exclusively investigate the recent state-of-the-art Swin (Shifted Window) Transformer, which computes self-attention in both intra- and inter-window fashion. We compare the Swin Transformer against CNN models on four RPQD image datasets, each covering a different kind of produce: fruits and vegetables, fish, pork, and beef. We observe that the Swin Transformer not only achieves better or competitive performance but is also data- and compute-efficient, making it ideal for practical real-world deployment. To the best of our knowledge, this is the first large-scale empirical study on RPQD, and we hope it receives more attention in future work.
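To make the "intra- and inter-window" phrasing concrete, the sketch below shows the window partitioning on which Swin's self-attention operates: attention is computed within each window, and shifting the grid between consecutive blocks lets information flow across window borders. This is a generic illustration of the mechanism, not code from this study:

```python
import torch

def window_partition(x, window_size, shift=0):
    """Split a feature map into non-overlapping windows.

    x: (B, H, W, C) feature map; window_size must divide H and W.
    shift: cyclic shift applied before partitioning (used in alternating
           Swin blocks so that the next layer's windows straddle the
           previous layer's window borders).
    """
    if shift:
        x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)
    return windows  # (B * num_windows, tokens_per_window, C): attention runs per row

# 56x56 map, 7x7 windows -> 64 windows per image, 49 tokens each.
print(window_partition(torch.randn(1, 56, 56, 96), 7).shape)  # torch.Size([64, 49, 96])
```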
Data scarcity and noise are important issues in industrial applications of machine learning. However, it is often challenging to devise a scalable and generalized approach to address the fundamental distributional and semantic properties of datasets with black-box models. For this reason, data-centric approaches are crucial for the automation of machine learning operation pipelines. To serve as the basis for this automation, we suggest a domain-agnostic pipeline for refining the quality of data in image classification problems. The pipeline comprises data valuation, cleansing, and augmentation. With an appropriate combination of these methods, we achieved 84.711% test accuracy in the Data-Centric AI competition using only data refinement (receiving an honorable mention in the most-innovative category).
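As a rough illustration of what such a pipeline skeleton might look like, the sketch below strings together placeholder valuation, cleansing, and augmentation steps; the concrete methods used for the competition entry are not specified here, and these placeholders are assumptions:

```python
import numpy as np

def refine_dataset(images, labels, value_fn, augment_fn, keep_fraction=0.9):
    """Generic data-centric refinement skeleton: value -> cleanse -> augment.

    value_fn(image, label) -> float  scores how useful/clean an example looks
                                     (e.g. agreement of a small model ensemble).
    augment_fn(image)      -> image  produces one augmented copy.
    """
    # 1. Valuation: score every example.
    scores = np.array([value_fn(im, lb) for im, lb in zip(images, labels)])

    # 2. Cleansing: drop the lowest-valued (likely mislabeled or noisy) tail.
    keep = scores.argsort()[::-1][: int(len(images) * keep_fraction)]
    images = [images[i] for i in keep]
    labels = [labels[i] for i in keep]

    # 3. Augmentation: grow the cleaned set with transformed copies.
    images += [augment_fn(im) for im in images]
    labels += labels
    return images, labels
```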